    Teledentistry: New Tool to Access Dental Care

    The health sector is undergoing a dramatic revolution through the incorporation of computers and telecommunications. The implications for hospitals and physicians have gained attention; however, the impact on dentistry is less widely reported. Teledentistry can improve access to dental care and can also serve as a tool for dental education.

    Facial Expression Based Automatic Album Creation

    With simple, cost-effective imaging solutions widely available these days, there has been an enormous rise in the number of images consumers take. Due to this increase, searching, browsing and managing images in multimedia systems has become more complex. One solution to this problem is to divide images into albums for meaningful and effective browsing. We propose a novel automated, expression-driven album creation method for consumer image management systems. The system groups images with faces having similar expressions into albums. The facial expressions of the subjects are compared using the Structural Similarity (SSIM) index, which is based on the theory of how easily the human visual system can extract the shape information of a scene. We also propose a search by similar expression, in which the user can create albums by providing example facial expression images. A qualitative analysis of the performance of the system is presented on the basis of a user study.
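
    The grouping step can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: it assumes the faces have already been detected, cropped and aligned to a common size, and the greedy thresholded grouping (with the illustrative `threshold` parameter) stands in for whatever grouping strategy the system actually uses.

```python
# Sketch of SSIM-based expression grouping. Assumes `faces` is a list of
# equal-sized 8-bit grayscale face crops; the detection/alignment stage
# and the threshold value are illustrative assumptions, not from the paper.
from skimage.metrics import structural_similarity as ssim

def group_by_expression(faces, threshold=0.7):
    """Greedily assign each face to the first album whose representative
    image it resembles (SSIM >= threshold); otherwise start a new album."""
    albums = []  # each album is a list of indices into `faces`
    for i, face in enumerate(faces):
        for album in albums:
            representative = faces[album[0]]  # first image in the album
            if ssim(face, representative, data_range=255) >= threshold:
                album.append(i)
                break
        else:  # no album matched: this face opens a new one
            albums.append([i])
    return albums
```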

    A SSIM-Based Approach for Finding Similar Facial Expressions

    There are various scenarios where finding the most similar expression is the requirement, rather than classifying one into discrete, pre-defined classes, for example, facial expression transfer and facial-expression-based automatic album generation. This paper proposes a novel method for finding the most similar facial expression. Instead of the regular L2-norm distance, we investigate the use of the Structural SIMilarity (SSIM) metric as the distance measure in an unsupervised nearest-neighbour algorithm. The feature vectors are generated using Active Appearance Models (AAMs). We also demonstrate how this technique can be extended to finding corresponding facial expression images across two or more subjects, which is useful in applications such as facial animation and automatic expression transfer. Person-independent facial expression performance results are shown on the Multi-PIE, FEEDTUM and AVOZES databases. We also compare the performance of the SSIM metric against other distance metrics in a nearest-neighbour search for the most similar facial expression to a given image.
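
    As a concrete illustration of using SSIM in place of the L2 norm, the sketch below contrasts the two in a nearest-neighbour search. It assumes the query and gallery entries are equal-sized, shape-normalised 8-bit grayscale face images (e.g. AAM-warped appearance patches); the AAM fitting itself is outside this sketch, and the function names are ours.

```python
# Nearest-neighbour search with SSIM (higher = more similar, take argmax)
# versus the plain L2 norm (lower = more similar, take argmin). Inputs are
# assumed to be equal-sized 8-bit grayscale, shape-normalised face images.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def nearest_by_ssim(query, gallery):
    """Index of the gallery image with the highest SSIM to `query`."""
    scores = [ssim(query, g, data_range=255) for g in gallery]
    return int(np.argmax(scores))

def nearest_by_l2(query, gallery):
    """Baseline: index of the gallery image closest to `query` in L2."""
    dists = [np.linalg.norm(query.astype(float) - g.astype(float))
             for g in gallery]
    return int(np.argmin(dists))
```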

    Emotion Recognition Using PHOG and LPQ features

    We propose a method for automatic emotion recognition as part of the FERA 2011 competition. The system extracts pyramid of histogram of gradients (PHOG) and local phase quantisation (LPQ) features to encode shape and appearance information. For selecting the key frames, K-means clustering is applied to the normalised shape vectors derived from constrained local model (CLM) based face tracking on the image sequences. The shape vectors closest to the cluster centres are then used to extract the shape and appearance features. We demonstrate the results on the SSPNet GEMEP-FERA dataset, which comprises both person-specific and person-independent partitions. For emotion classification we use support vector machine (SVM) and large margin nearest neighbour (LMNN) classifiers and compare our results to the pre-computed FERA 2011 emotion challenge baseline.
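
    The key-frame selection step lends itself to a short sketch. Assuming `shapes` is an (n_frames, d) array of normalised shape vectors produced by the CLM tracker for one sequence, the frames nearest the K-means cluster centres are retained; the cluster count `k` and the use of scikit-learn are illustrative choices, and PHOG/LPQ extraction plus SVM training would follow on the selected frames.

```python
# Sketch of key-frame selection: cluster the per-frame shape vectors with
# K-means, then keep the frame nearest each cluster centre. The value of
# k and the library choice are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

def select_key_frames(shapes, k=5):
    """Return sorted indices of frames whose shape vectors lie closest
    to the k cluster centres of `shapes` (an (n_frames, d) array)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(shapes)
    key_frames = {
        int(np.argmin(np.linalg.norm(shapes - centre, axis=1)))
        for centre in km.cluster_centers_
    }
    return sorted(key_frames)
```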

    Facial Performance Transfer via Deformable Models and Parametric Correspondence

    The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches to facial performance transfer have gained tremendous interest in the computer vision and graphics communities. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs and using it to facilitate facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a meaningful transfer of both the nonrigid shape and texture across faces, irrespective of the speakers' gender, the shape and size of the faces, and the illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that sparse linear regression performs best. Moreover, we show the utility of the proposed framework for cross-language facial performance transfer, which is of interest to the movie dubbing industry.
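
    A minimal sketch of the parametric-correspondence idea follows, under stated assumptions: `src_params` and `tgt_params` are paired (n_frames, d) arrays of AAM parameters for the source and target faces performing the same content, and scikit-learn's Lasso stands in for the paper's sparse linear regression. AAM fitting and face synthesis are outside the sketch.

```python
# Sketch of learning a sparse linear map between two AAMs' parameter
# spaces. The paired training data and the Lasso regulariser weight are
# illustrative assumptions; the paper's exact formulation may differ.
from sklearn.linear_model import Lasso

def learn_correspondence(src_params, tgt_params, alpha=0.01):
    """Fit a sparse linear map from source to target AAM parameters."""
    model = Lasso(alpha=alpha, max_iter=10000)
    model.fit(src_params, tgt_params)  # multi-output regression
    return model

def transfer_frame(model, src_frame_params):
    """Map one source frame's parameter vector into the target AAM's
    parameter space; the target AAM then renders the transferred face."""
    return model.predict(src_frame_params.reshape(1, -1))[0]
```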